Solving strongly convex-concave composite saddle-point problems with a low dimension of one group of variables

Authors

Abstract

Algorithmic methods are developed that guarantee efficient complexity estimates for strongly convex-concave saddle-point problems in the case when one group of variables has a high dimension, while the other has a rather low dimension (up to 100). These methods are based on reducing problems of this type to the minimization (maximization) of a convex (concave) functional with respect to one of the groups of variables, for which an approximate value of the gradient at an arbitrary point can be obtained with the required accuracy by solving an auxiliary optimization subproblem with respect to the other group of variables. It is proposed to use the ellipsoid method and Vaidya's method for the low-dimensional problems, and accelerated gradient methods with inexact information about the gradient or subgradient for the high-dimensional problems. In the case when the group of variables ranging over a hypercube has a very low dimension (up to five), another approach to strongly convex-concave saddle-point problems turns out to be quite efficient. This approach is based on a new version of a multidimensional analogue of Nesterov's method on a square (the multidimensional dichotomy method), which allows the use of inexact values of the gradient of the objective functional. Bibliography: 28 titles.
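
To make the reduction concrete, the sketch below illustrates the general pattern on a toy quadratic saddle function; the test problem, the inner solver, and all constants are assumptions made purely for illustration and are not the paper's algorithms or experiments. An outer low-dimensional method (here a textbook ellipsoid method over y) maximizes g(y) = min_x F(x, y), and each gradient of g is supplied inexactly by approximately solving the auxiliary minimization in the high-dimensional x (plain gradient descent here, whereas the paper proposes accelerated methods for this stage and also Vaidya's method for the outer one).

```python
# Minimal sketch of the reduction described in the abstract, under illustrative assumptions
# (quadratic test saddle function F, plain gradient descent instead of an accelerated method
# in the inner loop, fixed iteration counts). It only shows the basic pattern: an outer
# low-dimensional method over y driven by inexact gradients of g(y) = min_x F(x, y), each
# obtained by approximately solving the auxiliary subproblem in the high-dimensional x.

import numpy as np

n, m = 1000, 5                                  # x is high-dimensional, y is low-dimensional
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n)) / np.sqrt(n)
x0 = rng.standard_normal(n)
y0 = rng.standard_normal(m)

# F(x, y) = 0.5||x - x0||^2 + y^T A x - 0.5||y - y0||^2: strongly convex in x, strongly concave in y.
def F_val(x, y):
    return 0.5 * np.sum((x - x0) ** 2) + y @ (A @ x) - 0.5 * np.sum((y - y0) ** 2)

def inexact_value_and_grad(y, inner_iters=200, step=0.5):
    """Approximate g(y) = min_x F(x, y) and its gradient grad_y F(x*(y), y) (Danskin's theorem)."""
    x = np.zeros(n)
    for _ in range(inner_iters):                # inner subproblem in x; the paper uses accelerated methods
        x -= step * ((x - x0) + A.T @ y)
    return F_val(x, y), A @ x - (y - y0)

def ellipsoid_maximize(oracle, dim, R=10.0, iters=300):
    """Standard ellipsoid method maximizing a concave g, localized in the ball ||y|| <= R."""
    c = np.zeros(dim)                           # ellipsoid center
    P = (R ** 2) * np.eye(dim)                  # ellipsoid matrix
    best_val, best_y = -np.inf, c.copy()
    for _ in range(iters):
        val, s = oracle(c)                      # inexact value and supergradient at the center
        if val > best_val:
            best_val, best_y = val, c.copy()
        Ps = P @ s
        sPs = s @ Ps
        if sPs <= 1e-30:                        # gradient is (numerically) zero: center is optimal
            return c
        c = c + Ps / ((dim + 1) * np.sqrt(sPs)) # move the center into the half-space of larger g
        P = (dim ** 2 / (dim ** 2 - 1.0)) * (P - (2.0 / (dim + 1)) * np.outer(Ps, Ps) / sPs)
    return best_y

y_approx = ellipsoid_maximize(inexact_value_and_grad, m)
y_exact = np.linalg.solve(np.eye(m) + A @ A.T, A @ x0 + y0)   # closed form for this toy F
print("error of outer solution:", np.linalg.norm(y_approx - y_exact))
```

Danskin's theorem is what justifies using grad_y F at the approximate inner minimizer as an inexact supergradient of g, and the complexity of the outer cutting-plane method depends only on the small dimension of y, which is the point of the reduction.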

Similar articles

An Accelerated HPE-Type Algorithm for a Class of Composite Convex-Concave Saddle-Point Problems

This article proposes a new algorithm for solving a class of composite convex-concave saddle-point problems. The new algorithm is a special instance of the hybrid proximal extragradient framework in which Nesterov's accelerated variant is used to approximately solve the prox subproblems. One of the advantages of the new method is that it works for any constant choice of proximal stepsize. More...
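
A heavily simplified sketch of this pattern follows. It is not the article's accelerated HPE algorithm: the relative-error condition and the extragradient correction of the HPE framework are omitted, the Nesterov-accelerated inner solver is replaced by plain gradient descent-ascent, and the bilinear test problem and all constants are assumptions for illustration only. What remains is the generic idea of an outer proximal-point loop with a constant proximal stepsize whose regularized saddle subproblems are solved approximately by an inner first-order method.

```python
# Simplified sketch (NOT the accelerated HPE method of the article): inexact proximal-point
# iterations for the bilinear saddle problem  min_x max_y  x^T K y + a^T x - b^T y.
# Each regularized subproblem is strongly convex-concave and is solved approximately by
# plain gradient descent-ascent.  All problem data below are illustrative assumptions.

import numpy as np

d = 5
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
V, _ = np.linalg.qr(rng.standard_normal((d, d)))
K = U @ np.diag(np.linspace(0.5, 1.0, d)) @ V.T    # well-conditioned coupling matrix
a, b = rng.standard_normal(d), rng.standard_normal(d)

def solve_prox_subproblem(xk, yk, lam, inner_iters=200, eta=0.5):
    """Approximate saddle point of  x^T K y + a^T x - b^T y + ||x-xk||^2/(2 lam) - ||y-yk||^2/(2 lam)."""
    x, y = xk.copy(), yk.copy()
    for _ in range(inner_iters):
        gx = K @ y + a + (x - xk) / lam            # gradient of the regularized objective in x
        gy = K.T @ x - b - (y - yk) / lam          # gradient in y
        x, y = x - eta * gx, y + eta * gy          # simultaneous descent/ascent step
    return x, y

lam = 1.0                                          # constant proximal stepsize
x, y = np.zeros(d), np.zeros(d)
for _ in range(150):                               # outer (inexact) proximal-point iterations
    x, y = solve_prox_subproblem(x, y, lam)

x_star = np.linalg.solve(K.T, b)                   # exact saddle point: K^T x = b, K y = -a
y_star = np.linalg.solve(K, -a)
print("distance to saddle point:", np.linalg.norm(x - x_star) + np.linalg.norm(y - y_star))
```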

A simple algorithm for a class of nonsmooth convex-concave saddle-point problems

This supplementary material includes numerical examples demonstrating the flexibility and potential of the algorithm PAPC developed in the paper. We show that PAPC does behave numerically as predicted by the theory and can efficiently solve problems which cannot be solved by well-known state-of-the-art algorithms sharing the same efficiency estimate. Here, for illustration purposes, we compare ...

Saddle Point Seeking for Convex Optimization Problems

In this paper, we consider convex optimization problems with constraints. By combining the idea of a Lie bracket approximation for extremum seeking systems and saddle point algorithms, we propose a feedback which steers a single-integrator system to the set of saddle points of the Lagrangian associated to the convex optimization problem. We prove practical uniform asymptotic stability of the se...
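
As a point of reference for the saddle-point-seeking idea, the sketch below runs plain discrete-time gradient descent-ascent (Arrow-Hurwicz-type dynamics) on the Lagrangian of a small equality-constrained, strongly convex problem. It shows only the underlying mechanism of converging to a saddle point of the Lagrangian, not the Lie-bracket extremum-seeking feedback proposed in the paper; the problem data, step sizes, and iteration count are illustrative assumptions.

```python
# Minimal sketch (not the paper's Lie-bracket feedback): discrete-time gradient
# descent-ascent on the Lagrangian of  minimize 0.5*||x - b||^2  subject to  A x = c.
# The iterates converge to the saddle point of L(x, lam) = 0.5*||x - b||^2 + lam^T (A x - c).

import numpy as np

rng = np.random.default_rng(1)
n_x, n_c = 10, 3                                # decision variables and equality constraints
A = rng.standard_normal((n_c, n_x)) / np.sqrt(n_x)
b = rng.standard_normal(n_x)
c = rng.standard_normal(n_c)

x, lam = np.zeros(n_x), np.zeros(n_c)
tau, sigma = 0.1, 0.1                           # primal and dual step sizes (assumed)

for _ in range(5000):
    grad_x = (x - b) + A.T @ lam                # descend in the primal variable
    grad_lam = A @ x - c                        # ascend in the dual variable
    x, lam = x - tau * grad_x, lam + sigma * grad_lam

# Closed-form saddle point from the KKT conditions, for comparison
lam_star = np.linalg.solve(A @ A.T, A @ b - c)
x_star = b - A.T @ lam_star
print("primal error:", np.linalg.norm(x - x_star), " dual error:", np.linalg.norm(lam - lam_star))
```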

Preconditioned Douglas-Rachford Splitting Methods for Convex-concave Saddle-point Problems

We propose a preconditioned version of the Douglas-Rachford splitting method for solving convex-concave saddle-point problems associated with Fenchel-Rockafellar duality. It allows the use of approximate solvers for the linear subproblem arising in this context. We prove weak convergence in Hilbert space under minimal assumptions. In particular, various efficient preconditioners are introduced in th...

An accelerated non-Euclidean hybrid proximal extragradient-type algorithm for convex-concave saddle-point problems

This paper describes an accelerated HPE-type method based on general Bregman distances for solving monotone saddle-point (SP) problems. The algorithm is a special instance of a non-Euclidean hybrid proximal extragradient framework introduced by Svaiter and Solodov [28] where the prox sub-inclusions are solved using an accelerated gradient method. It generalizes the accelerated HPE algorithm pre...

Journal

Journal title: Sbornik Mathematics

Year: 2023

ISSN: 1064-5616, 1468-4802

DOI: https://doi.org/10.4213/sm9700e